Computational results (preliminary version)

This report is a preliminary version of the computational results included in the paper

Nicolas Forget, Lars Relund Nielsen and Sune Lauth Gadegaard (CORAL, BSS, Aarhus University, https://econ.au.dk/coral)
2020-08-27

Only instances generated with the option spheredown are used, since they seem to be hard instances.

In this section we report on the computational experiments conducted with the tri–objective branch–and–bound algorithm.

All instances are converted to minimization problems, meaning that if an objective function \(z(x)\) should be maximized, we minimize \(-z(x)\) instead.

The purpose of the computational study is to answer the following questions:

  1. What is the performance of the different algorithm configurations, and which configurations perform best? In particular, is it worth performing objective branching?
  2. Why does objective branching perform as it does?
  3. How does the performance of the tri-objective B&B algorithm compare to objective space search algorithms?

Implementation details and algorithm configurations

All algorithms have been implemented in Julia \(1.0.1\). The experiments were run on a computer with an Intel(R) Core(TM) i7-4785T CPU @ 2.20GHz processor and 16GB of RAM, running Linux Ubuntu 14.04 LTS.

In order to compute the linear relaxation at each node, the solver Bensolve is used (Lohne and Weibing, n.d.). To avoid reading from and writing to text files at each node of the tree, we have implemented a wrapper that calls Bensolve, retrieves the relevant outputs and inserts them directly into our code as matrices. Numerical instabilities leading to missing non-dominated points in the final output were detected while building the hyperplane representation of the lower bound set: the matrix used when finding the normal to each hyperplane may in a few cases be close to singular (its determinant is close to zero). Hence a small value eps has been introduced so that if the determinant is less than or equal to eps, the hyperplane is discarded, leaving a weaker but still valid lower bound set. For more information, we refer the reader to the section about numerical instabilities in Forget, Nielsen, and Gadegaard (2020). We have used eps \(= 0.001\) in all tests reported in this paper, and no missing non-dominated points have been detected.
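The eps-safeguard can be sketched as follows (an illustrative Python sketch with assumed helper names, not the authors' Julia wrapper): the hyperplane through three extreme points \(p_0, p_1, p_2\) of the lower bound set is taken as \(\{y : n \cdot y = 1\}\) with \(Mn = (1,1,1)^\top\), where the rows of \(M\) are the points; if \(\det(M)\) is close to zero, the facet is discarded.

```python
EPS = 0.001  # value used in all tests reported in the paper

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def hyperplane_normal(p0, p1, p2, eps=EPS):
    """Normal of the hyperplane through p0, p1, p2, or None if the matrix
    is close to singular and the facet should be discarded (yielding a
    weaker but valid lower bound set)."""
    m = [list(p0), list(p1), list(p2)]
    d = det3(m)
    if abs(d) <= eps:
        return None  # near-singular: discard this hyperplane
    # Solve M n = (1, 1, 1)^T by Cramer's rule.
    n = []
    for j in range(3):
        mj = [row[:] for row in m]
        for i in range(3):
            mj[i][j] = 1.0
        n.append(det3(mj) / d)
    return n
```

For instance, the three unit points span the hyperplane with normal \((1,1,1)\), while three collinear points yield a singular matrix and are discarded.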

The variable selected in Step 5 of the algorithm differs depending on whether objective branching is applied or not. If no objective branching is performed, the algorithm branches on the free variable that is most often fractional among the extreme points of the lower bound set, given that at least one of the variables has a fractional value. If no variable has a fractional value in any of the extreme points, the variable whose values differ the most across the extreme points (i.e. with the average value closest to \(0.5\)) is chosen. If objective branching is enabled, the rule is the same, except that a different variable may be chosen in each sub-problem. Indeed, given a sub-problem \(P(\eta,s)\) in the objective space, only the extreme points of the lower bound set included in \(P(\eta,s)\) (i.e. that dominate \(s\)) are considered. In case there are multiple possible choices, or no extreme point is included in \(P(\eta,s)\), the variable with the smallest index is chosen.
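The rule above can be sketched as follows (an illustrative sketch with assumed data structures, not the authors' Julia implementation: extreme points are tuples of variable values and `free_vars` is a list of variable indices in increasing order, so ties are broken by the smallest index):

```python
def select_branching_variable(extreme_points, free_vars, tol=1e-6):
    """Pick the free variable most often fractional over the extreme points;
    if none is ever fractional, pick the one whose average value over the
    extreme points is closest to 0.5."""
    def is_frac(v):
        return min(v - int(v), int(v) + 1 - v) > tol

    frac_counts = {j: sum(1 for p in extreme_points if is_frac(p[j]))
                   for j in free_vars}
    # max() returns the first maximum, i.e. the smallest index on ties.
    best = max(free_vars, key=lambda j: frac_counts[j])
    if frac_counts[best] > 0:
        return best
    # No fractional value in any extreme point: fall back to the average rule.
    def avg(j):
        return sum(p[j] for p in extreme_points) / len(extreme_points)
    return min(free_vars, key=lambda j: abs(avg(j) - 0.5))
```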

To test different algorithm configurations we use two parameters: the node selection strategy, either breadth-first (B) or depth-first (D), and the objective branching strategy, either no objective branching (N), exact objective branching (E) or cone objective branching (C).

This leads to 6 configurations: B|N, B|E, B|C, D|N, D|E and D|C. For an overview of more configurations, such as different variable selection rules (Step 5 of the algorithm), the reader is referred to Forget, Nielsen, and Gadegaard (2020).

Test instances

Table 1: Instances used (\(n\): number of variables, #: average number of instances per variable size).

|      | \(n\)                      | #    |
|------|----------------------------|------|
| AP   | 36, 49, 64, 121            | 10.0 |
| KP   | 10, 15, 20, 25, 30, 35, 40 | 29.9 |
| UFLP | 30, 42, 56, 72             | 10.0 |

A total of 289 instances (see Table 1) have been generated. Three problem classes are considered: the linear assignment problem (AP), the knapsack problem (KP) and the uncapacitated facility location problem (UFLP). The mathematical programming formulation for each problem is given in the appendix. The number of variables in each problem class has been increased until no algorithm configuration was able to compute an exact solution within a time limit of half an hour (1800 seconds).

The objective coefficients are generated in the range [1, 1000] on the lower part of a 3D sphere using the R package gMOIP (Relund Nielsen 2020). Hence a large share of the coefficient vectors of the variables are mutually non-dominated (91% for AP and 96% for KP). This way of generating the objective coefficients has been tested against other methods and tends to result in solutions with a high number of non-dominated points (Forget, Nielsen, and Gadegaard 2020), implying hard instances. For UFLP instances the same generation method has been used. However, since two cost groups exist (see the appendix), a range of [1, 1000] has been used for generating the cost of assigning a customer to a service point and a range of [1, 100] for the cost of opening a service point. All objective coefficients are integer.
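The idea can be sketched as follows (a rough sketch under assumptions; the exact sampling scheme of the gMOIP package may differ): cost vectors sampled on the lower-left part of a sphere are mutually non-dominated in most cases, since decreasing one coordinate on the sphere surface forces another to increase.

```python
import math
import random

def sphere_down_coeffs(n, lb=1, ub=1000, seed=0):
    """Sample n integer 3D cost vectors in [lb, ub] on the lower part of a
    sphere centered at (ub, ub, ub) with radius ub - lb (assumed scheme)."""
    rng = random.Random(seed)
    coeffs = []
    for _ in range(n):
        # Uniform direction on the unit sphere, flipped into the octant
        # where all components are negative ("spheredown").
        d = [rng.gauss(0, 1) for _ in range(3)]
        norm = math.sqrt(sum(x * x for x in d))
        d = [-abs(x) / norm for x in d]
        coeffs.append(tuple(round(ub + (ub - lb) * x) for x in d))
    return coeffs
```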

For AP the constraints are fixed given the problem size. The same holds for UFLP when assuming that the number of facilities equals the number of customers. For KP instances, the integer coefficients of the constraint are generated randomly in the range [1,15]. The right-hand side is set equal to half of the sum of the coefficients on the left-hand side [LRN: Rounded???]. For KP, three different constraints are generated for each variable size and set of objective coefficients.
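A minimal sketch of this KP constraint generation (the rounding of the right-hand side is an assumption here, flagged as an open question in the draft):

```python
import random

def knapsack_constraint(n, seed=0):
    """Generate one KP constraint: n integer weights in [1, 15] and a
    right-hand side equal to half the sum of the weights (rounded down,
    which is an assumption)."""
    rng = random.Random(seed)
    w = [rng.randint(1, 15) for _ in range(n)]
    rhs = sum(w) // 2
    return w, rhs
```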

All algorithm configurations have been tested using a time limit of half an hour (1800 seconds).

Performance of the different algorithm configurations

First, we rank the configurations with respect to mean cpu time over all instances. From best to worst, the sequence becomes B|C (0%), D|C (15%), B|N (16%), D|N (33%), B|E (51%), D|E (79%), where the increase in percent compared to the best configuration is given in parentheses. Note that the mean cpu times calculated are in fact lower bounds due to the total run time limit.

Second, a comparison of the different algorithm configurations for each problem class is given in Figure 1. Recall that we have increased the variable size for each problem class until the size becomes so big that some instances cannot be solved within the time limit. That is, the percentage of instances solved before the time limit is below 100%.

Figure 1: Performance profile: percentage of instances solved given cpu time. An instance is considered unsolved if the cpu time exceeds 1800 seconds (the time limit).

Some observations based on Figure 1 are:

These observations will be explored in more detail in the next sections.

Objective branching: a closer look

To take a closer look at the different objective branching configurations, we restrict ourselves to the set of instances which have been solved to optimality by all algorithm configurations. Detailed summary statistics are given in Table 2.

Table 2: Detailed results for all instances which have been solved to optimality by all algorithm configurations.
B|C
B|E
B|N
D|C
D|E
D|N
na #b |YN|c cpud cpuLBe nodesf pruneg cpud cpuLBe nodesf pruneg cpud cpuLBe nodesf pruneg cpud cpuLBe nodesf pruneg cpud cpuLBe nodesf pruneg cpud cpuLBe nodesf pruneg
AP
36 10 160 (14/0/86) 2.8 [1.9,3.5] 2.3 1540 [3,18,27] [83,11,6] 7.8 [3.5,11.2] 2.7 2594 [3,17,26] [91,5,4] 1.4 [1.1,1.7] 1.5 807 [6,11,16] [0,74,26] 3 [2.1,3.7] 2.2 1783 [3,19,27] [83,12,5] 8.4 [3.5,14.7] 2.7 2640 [3,18,27] [91,7,2] 1.4 [1.1,1.6] 1.5 831 [6,11,16] [0,76,24]
49 10 348 (8/0/92) 14.9 [11.4,21.4] 3.3 4933 [3,25,38] [82,8,11] 101.2 [35.9,246.8] 3.6 11225 [3,21,37] [92,3,5] 7.8 [6,10] 2.4 2573 [7,13,20] [0,59,41] 16.1 [12.3,23.4] 3.2 6170 [3,26,38] [83,9,8] 152.7 [58.1,302.7] 3.6 12037 [3,22,38] [94,3,2] 7.6 [6,9.8] 2.3 2847 [7,13,21] [0,62,38]
64 3 587 (7/0/93) 51.4 [47.5,57.5] 4.7 11489 [3,29,51] [78,6,16] 565.5 [498,676.2] 4.4 34192 [3,24,50] [92,1,7] 28.4 [24.8,35] 3.6 6182 [8,15,27] [0,46,54] 72.5 [68,76.8] 4.5 19421 [3,32,52] [82,7,11] 969.3 [878.6,1058.7] 4.7 44736 [3,27,51] [95,2,3] 28.2 [23.4,35.4] 3.4 7166 [8,15,27] [0,50,50]
23 298 (9/0/91) 14.4 [1.9,57.5] 3.1 4313 [3,23,35] [82,9,9] 121.1 [3.5,676.2] 3.3 10468 [3,20,34] [92,4,5] 7.7 [1.1,35] 2.2 2276 [7,12,19] [0,64,36] 17.8 [2.1,76.8] 2.9 5991 [3,24,35] [83,10,7] 196.5 [3.5,1058.7] 3.3 12216 [3,21,35] [93,5,2] 7.6 [1.1,35.4] 2.1 2534 [7,12,20] [0,66,34]
KP
10 30 43 (28/0/72) 0.2 [0.1,0.3] 0.7 315 [5,9,11] [63,28,8] 0.2 [0.1,0.4] 0.9 350 [5,9,11] [71,22,6] 0.2 [0.1,0.3] 0.6 336 [5,9,11] [28,46,26] 0.2 [0.1,0.4] 0.7 371 [5,9,11] [64,32,4] 0.3 [0.1,0.7] 0.8 404 [5,9,11] [71,26,3] 0.2 [0.1,0.3] 0.6 393 [5,9,11] [33,49,18]
15 30 145 (16/0/84) 2.4 [0.9,5.5] 1.1 2238 [7,12,16] [67,19,14] 3.7 [1.3,9.2] 1.4 2951 [6,11,16] [79,12,10] 2.9 [1.3,5] 1.1 2499 [7,13,16] [12,43,45] 3.2 [1.5,7.1] 1 3117 [7,13,16] [70,21,9] 6.7 [2,19] 1.3 3910 [6,12,16] [81,14,5] 3.4 [1.9,5.7] 1 3540 [7,13,16] [19,46,35]
20 30 329 (10/0/90) 18.2 [2.5,43.8] 1.5 10335 [7,16,21] [72,12,16] 35.6 [3.3,147.2] 1.9 16899 [6,14,21] [84,6,11] 26.4 [5,50.6] 1.8 12484 [7,16,21] [9,43,49] 28.5 [3.2,71.8] 1.4 18618 [7,16,21] [75,14,11] 103.5 [4.4,413.6] 1.8 27738 [6,15,21] [88,7,5] 39.3 [6.7,83.6] 1.5 23616 [7,17,21] [14,43,43]
25 29 550 (8/0/92) 58.1 [23.5,107.6] 2.1 22251 [7,18,26] [70,9,20] 111.2 [38.1,218.9] 2.5 39468 [6,16,26] [84,4,12] 102.1 [46.4,184.8] 3 27918 [7,19,26] [3,41,56] 98.8 [41.6,184.5] 1.8 50927 [8,19,26] [75,10,15] 366.2 [89,867.4] 2.3 81458 [6,17,26] [90,5,6] 182.2 [82.2,323.8] 2.4 68127 [7,20,26] [6,44,50]
30 10 752 (7/0/93) 182.2 [101.6,250.9] 3 42622 [8,20,31] [71,7,22] 302.9 [127.2,424.3] 3.3 76974 [6,17,31] [85,3,12] 317.7 [196.3,395] 5 53069 [8,21,31] [2,47,52] 247.4 [147,325.3] 2.4 102150 [8,21,31] [76,9,15] 896.6 [185.9,1401.7] 2.8 164866 [5,19,31] [90,4,6] 621.1 [323.1,837.5] 4 138892 [7,23,31] [4,48,48]
129 302 (10/0/90) 32 [0.1,250.9] 1.5 11303 [7,14,19] [68,17,15] 57.7 [0.1,424.3] 1.8 19537 [6,13,19] [80,10,10] 54.4 [0.1,395] 1.9 13953 [7,15,19] [12,44,44] 48.8 [0.1,325.3] 1.3 24508 [7,15,19] [71,19,10] 177.5 [0.1,1401.7] 1.6 38546 [6,14,19] [83,12,5] 99.1 [0.1,837.5] 1.6 32489 [7,15,19] [17,46,37]
UFLP
30 10 325 (14/0/86) 8.4 [6.2,12.4] 2.7 3157 [3,19,26] [75,16,9] 33.4 [18.3,64.1] 3.3 6158 [3,17,26] [89,6,5] 9.4 [5.9,14.5] 2.1 3517 [7,14,21] [0,65,35] 8.9 [6.1,12.3] 2.6 3868 [3,20,26] [75,18,7] 47.2 [25.3,108] 3.1 6758 [3,17,26] [90,8,3] 9.4 [6.8,15.3] 1.9 4270 [7,14,22] [0,70,30]
42 4 900 (8/0/92) 59 [50.7,70.3] 4.1 13026 [4,25,39] [76,10,14] 712.8 [293.1,1153.5] 4.1 34729 [4,21,38] [91,3,6] 90.4 [72.1,113.8] 4.1 16508 [8,17,32] [0,50,50] 61.1 [48.9,73.2] 3.8 17300 [4,25,39] [77,12,10] 1288.7 [580.4,1783.3] 4.1 39028 [4,22,39] [93,4,3] 110 [80.3,139.7] 3.6 25250 [8,18,32] [0,55,45]
14 489 (11/0/89) 22.9 [6.2,70.3] 3.1 5977 [4,21,30] [75,14,10] 227.5 [18.3,1153.5] 3.5 14321 [4,18,30] [90,5,5] 32.5 [5.9,113.8] 2.7 7229 [7,15,24] [0,61,39] 23.8 [6.1,73.2] 3 7705 [4,21,30] [76,17,8] 401.9 [25.3,1783.3] 3.4 15978 [4,19,30] [91,7,3] 38.1 [6.8,139.7] 2.4 10264 [7,15,25] [0,66,34]
a Number of variables.
b Number of instances.
c Avg. number of non-dominated points (supported extreme, supported non-extreme and unsupported in percent).
d Avg. cpu time (seconds). Square brackets contains the range.
e Avg. cpu time (milliseconds) used to find the LB set, per node where a LB set was found.
f Avg. number of nodes in the branching tree. Square brackets contains the min, avg. and max depth of leaf nodes.
g Percentages of leaf nodes pruned by infeasibility, optimality and dominance.

Figure 1 illustrates that using exact objective branching (E) is not efficient with respect to cpu time. As explained in Section 4.3, exact objective branching may create several sub-problems in the objective space, resulting in more nodes in the branch and bound tree. This may lead to higher computational times if the algorithm cannot explore them all fast enough. Indeed, this can be seen in Figure 2, where the E configurations produce more nodes in the tree compared to C and N given the same node selection rule. This observation makes sense, as the purpose of objective branching as described in Algorithm 2 is to produce more sub-problems and thus get wider trees with a smaller depth.

Figure 2: Average branching tree size.

Furthermore, under exact objective branching the lower bound sets are harder to compute [LRN: we need to add a statistic cpu/LP solved]. This indicates that the sub-problems generated in the E configurations tend to be harder to solve due to a more complicated set of constraints.

Another way to improve and possibly make E competitive would be to make more use of the local information provided by the sub-problems in the objective space. For example, as presented in [I have to find ref again, it’s somewhere in my references list], pre-processing in the multi-objective case has much less impact than in the single-objective case, as the efficient solutions may differ a lot. However, one could imagine that the solutions are more likely to be close in a specific part of the objective space and thus, by localizing the branch and bound, pre-processing or introducing cuts may have significantly more impact.

Except for AP, the C configurations perform better than the N configurations. A first explanation of the efficiency of C for KP and UFLP is that for a given node \(\eta\), the C configurations focus on a specific part of the objective space (a cone defined by \(d^N(\eta)\)). Thus, the lower bound sets tend to be less complex (some areas of the objective space are discarded before computation, whereas the full objective space is considered in the N configurations), which means possibly spending less time in the most expensive part of the branch and bound: computing the lower bound sets.

In addition, the way nodes are fathomed is highly influenced. Indeed, as can be seen in Table 2, in the C configurations the proportion of nodes fathomed by infeasibility tends to be much higher than in the N configurations. Moreover, fathoming a node by infeasibility is faster than fathoming a node by optimality or by dominance: to fathom by dominance or optimality, the lower bound set must be computed first. On the contrary, testing the feasibility of the sub-problem is the first thing done when a node is explored, and thus a node fathomed by infeasibility is explored faster (see [report] for how much faster it is in practice).
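The exploration order implied above can be sketched as follows (illustrative only, with the cheap and expensive steps passed in as callables; the names are assumptions, not the authors' code):

```python
def explore_node(is_feasible, compute_lb_set, lb_is_integer, is_dominated):
    """Return how a node is fathomed, or 'branch' if it is not.

    Feasibility is tested before the (expensive) lower bound set is
    computed, so nodes pruned by infeasibility are the cheapest to process.
    """
    if not is_feasible():        # cheap test, done first
        return "infeasibility"
    lb_set = compute_lb_set()    # the most expensive step (LP relaxation)
    if lb_is_integer(lb_set):
        return "optimality"      # all extreme points integer: node solved
    if is_dominated(lb_set):
        return "dominance"       # upper bound set dominates the LB set
    return "branch"
```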

The case of AP is special. Indeed, since the single-objective AP is in P, in the N configurations all the extreme points of the lower bound sets are always integer. Thus, when a variable is chosen for branching, the special branching rule is triggered: the variable chosen is the one whose values differ the most across the extreme points. However, in the C configurations, due to the added constraints, the extreme points may have fractional values and thus the branching rule differs from the N configurations. This seems to have a significant impact on the branch and bound tree, which makes the N configurations the best for AP.

In conclusion, it seems that how objective branching behaves, and whether it is efficient or not, is very sensitive to the choices made in the other components of the branch and bound algorithm (variable selection, lower bound set, pre-processing, …). The reader is referred to [report] for more details.

Node selection: breadth vs depth

Comparing the branch and bound algorithm with an objective space search algorithm

The best configuration of our branch and bound framework is now compared to an efficient objective space search algorithm from the literature [ref PhD Tamby]. Note that the goal of this paper is rather to take a further step in making branch and bound efficient in the multi-objective case, by extending a concept that made branch and bound algorithms more efficient in the bi-objective case, than to design an algorithm that performs better than objective space search algorithms on a specific set of instances. However, as the goal of future research is to obtain a competitive branch and bound algorithm in the multi-objective case, we still want to compare with an objective space search algorithm to see how far away this goal is.

The principle of the algorithm from [ref Tamby] is to explore sub-problems that are defined by the current set of local upper bounds, which itself evolves when new non-dominated points are found. Using this approach, the authors present a way to warmstart the MIP solver for each sub-problem explored, as well as rules that discard sub-problems without even solving them, both of which greatly improve the computational time. They show in their experiments that their algorithm performs better than some of the classical objective space search algorithms from the literature.
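As a minimal illustration of the bookkeeping involved (a sketch assuming minimization; it maintains the archive of non-dominated points found so far, not the local upper bounds themselves, and is not taken from [ref Tamby]):

```python
def dominates(p, q):
    """p dominates q if p <= q componentwise and p != q (minimization)."""
    return all(a <= b for a, b in zip(p, q)) and p != q

def update_nondominated(archive, new_point):
    """Insert new_point into the archive of non-dominated points, discarding
    it if it is dominated and filtering out points it dominates."""
    if any(dominates(p, new_point) or p == new_point for p in archive):
        return archive  # dominated or duplicate: archive unchanged
    return [p for p in archive if not dominates(new_point, p)] + [new_point]
```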

As the LP solver in Bensolve is GLPK, the MIP solver used for this algorithm is GLPK as well. Due to a technical constraint in the chosen programming language (Julia), the MIP solver could not be warmstarted. Hence, one should keep in mind that the cpu times reported for the objective space search algorithm are upper bounds on the cpu times achievable with warmstarting.

[LRN: performance profile -> best config B&B vs Objective Space Search (OSS) algorithm. See stat_OSS.csv, columns “solved” (1 if instance solved, 0 otherwise) and “cpu” (cpu time expressed in seconds) for OSS algorithm. One performance profile per problem class ?]

[NF: a few words about the results]


Things that are currently not used

Reverse performance plots for the different algorithm configurations are given in Figure 3.
Figure 3: Reverse performance plots for the different problem classes.

Table 3: Results for each instance group.
B|C
D|C
B|N
D|N
B|E
D|E
n # YN (se, sne, us) cpu win nodes (d leaf) prune (I/O/D) cpu win nodes (d leaf) prune (I/O/D) cpu win nodes (d leaf) prune (I/O/D) cpu win nodes (d leaf) prune (I/O/D) cpu win nodes (d leaf) prune (I/O/D) cpu win nodes (d leaf) prune (I/O/D)
AP
36 10 160 (14/0/86) 2.8 [1.9,3.5] 0 1540 [3,18,27] [83,11,6] 3 [2.1,3.7] 0 1783 [3,19,27] [83,12,5] 1.4 [1.1,1.7] 5 807 [6,11,16] [0,74,26] 1.4 [1.1,1.6] 5 831 [6,11,16] [0,76,24] 7.8 [3.5,11.2] 0 2594 [3,17,26] [91,5,4] 8.4 [3.5,14.7] 0 2640 [3,18,27] [91,7,2]
49 10 348 (8/0/92) 14.9 [11.4,21.4] 0 4933 [3,25,38] [82,8,11] 16.1 [12.3,23.4] 0 6170 [3,26,38] [83,9,8] 7.8 [6,10] 3 2573 [7,13,20] [0,59,41] 7.6 [6,9.8] 7 2847 [7,13,21] [0,62,38] 101.2 [35.9,246.8] 0 11225 [3,21,37] [92,3,5] 152.7 [58.1,302.7] 0 12037 [3,22,38] [94,3,2]
64 10 741 (5/0/95) 67.2 [47.5,92.5] 0 13811 [3,30,52] [79,6,15] 82.3 [66.8,114] 0 20584 [3,33,53] [82,7,11] 34.5 [24.8,44.8] 5 7414 [8,15,26] [0,47,53] 34 [23.4,43.5] 5 8558 [8,15,27] [0,51,49] 1171.6* [498,1800] 0 43664 [3,26,51] [92,1,7] 1550.8* [878.6,1800] 0 47007 [3,27,52] [95,2,3]
30 416 (7/0/93) 28.3 [1.9,92.5] 0 6761 [3,25,39] [81,8,11] 33.8 [2.1,114] 0 9512 [3,26,39] [83,9,8] 14.6 [1.1,44.8] 13 3598 [7,13,21] [0,60,40] 14.3 [1.1,43.5] 17 4079 [7,13,21] [0,63,37] 426.9* [3.5,1800] 0 19161 [3,21,38] [92,3,5] 570.6* [3.5,1800] 0 20561 [3,22,39] [94,4,2]
KP
10 30 43 (28/0/72) 0.2 [0.1,0.3] 25 315 [5,9,11] [63,28,8] 0.2 [0.1,0.4] 0 371 [5,9,11] [64,32,4] 0.2 [0.1,0.3] 5 336 [5,9,11] [28,46,26] 0.2 [0.1,0.3] 0 393 [5,9,11] [33,49,18] 0.2 [0.1,0.4] 0 350 [5,9,11] [71,22,6] 0.3 [0.1,0.7] 0 404 [5,9,11] [71,26,3]
15 30 145 (16/0/84) 2.4 [0.9,5.5] 22 2238 [7,12,16] [67,19,14] 3.2 [1.5,7.1] 1 3117 [7,13,16] [70,21,9] 2.9 [1.3,5] 5 2499 [7,13,16] [12,43,45] 3.4 [1.9,5.7] 2 3540 [7,13,16] [19,46,35] 3.7 [1.3,9.2] 0 2951 [6,11,16] [79,12,10] 6.7 [2,19] 0 3910 [6,12,16] [81,14,5]
20 30 329 (10/0/90) 18.2 [2.5,43.8] 30 10335 [7,16,21] [72,12,16] 28.5 [3.2,71.8] 0 18618 [7,16,21] [75,14,11] 26.4 [5,50.6] 0 12484 [7,16,21] [9,43,49] 39.3 [6.7,83.6] 0 23616 [7,17,21] [14,43,43] 35.6 [3.3,147.2] 0 16899 [6,14,21] [84,6,11] 103.5 [4.4,413.6] 0 27738 [6,15,21] [88,7,5]
25 30 564 (8/0/92) 60.8 [23.5,137.7] 29 22963 [7,18,26] [70,9,20] 104.4 [41.6,267.2] 1 53473 [8,19,26] [75,10,15] 104.7 [46.4,184.8] 0 28759 [7,19,26] [3,41,56] 189.8 [82.2,410.4] 0 71319 [7,20,26] [6,44,50] 117.1 [38.1,288.4] 0 40906 [6,16,26] [84,4,12] 414* [89,1800] 0 85434 [6,17,26] [90,5,6]
30 30 1049 (6/0/94) 307.8 [101.6,650.6] 30 64778 [8,21,31] [70,7,23] 508.6 [147,1130.2] 0 179463 [8,22,31] [75,8,16] 523.2 [196.3,1018.1] 0 82540 [8,22,31] [2,43,55] 1065.9* [323.1,1800] 0 241410 [7,24,31] [4,47,49] 820.6* [127.2,1800] 0 126692 [6,18,30] [85,2,13] 1498.9* [185.9,1800] 0 184964 [6,20,31] [91,3,6]
35 29 1841 (3/0/97) 1312* [633.9,1800] 28 132088 [8,22,32] [65,5,30] 1670.8* [785.1,1800] 1 367515 [8,25,36] [75,6,19] 1734.2* [1317.3,1800] 0 111157 [8,22,27] [0,23,76] 1800* [1800,1800] 0 239201 [9,27,36] [4,36,60] 1800* [1800,1800] 0 53288 [6,13,15] [74,0,26] 1800* [1800,1800] 0 196300 [7,23,36] [92,2,6]
179 655 (6/0/94) 277.8* [0.1,1800] 164 38265 [7,16,23] [68,13,19] 378.8* [0.1,1800] 3 102286 [7,17,23] [72,15,12] 391.1* [0.1,1800] 10 39230 [7,17,22] [9,40,51] 509.3* [0.1,1800] 2 95783 [7,18,23] [13,44,42] 455.4* [0.1,1800] 0 40108 [6,13,20] [80,8,13] 630.7* [0.1,1800] 0 82493 [6,16,23] [86,10,5]
UFLP
30 10 325 (14/0/86) 8.4 [6.2,12.4] 3 3157 [3,19,26] [75,16,9] 8.9 [6.1,12.3] 3 3868 [3,20,26] [75,18,7] 9.4 [5.9,14.5] 3 3517 [7,14,21] [0,65,35] 9.4 [6.8,15.3] 1 4270 [7,14,22] [0,70,30] 33.4 [18.3,64.1] 0 6158 [3,17,26] [89,6,5] 47.2 [25.3,108] 0 6758 [3,17,26] [90,8,3]
42 10 996 (8/0/92) 71.8 [50.7,94.2] 6 14976 [4,25,39] [76,10,14] 77.5 [48.9,110.9] 4 20581 [4,26,39] [77,13,10] 94.7 [72.1,113.8] 0 17321 [7,17,31] [0,51,49] 119 [80.3,146.3] 0 27599 [7,18,32] [0,57,43] 1207.7* [293.1,1800] 0 43101 [4,22,38] [91,3,6] 1595.5* [580.4,1800] 0 47026 [4,22,39] [93,4,3]
56 1 1970 (7/0/93) 302.2 [302.2,302.2] 1 39587 [4,29,53] [75,7,17] 347.4 [347.4,347.4] 0 63317 [4,31,52] [78,9,12] 608.2 [608.2,608.2] 0 51547 [7,20,42] [0,42,58] 784.1 [784.1,784.1] 0 90775 [7,21,46] [0,50,50] 1800* [1800,1800] 0 26956 [4,14,18] [89,0,10] 1800* [1800,1800] 0 83684 [5,27,53] [94,3,3]
21 723 (9/0/91) 52.6 [6.2,302.2] 10 10520 [4,23,33] [76,13,12] 57.7 [6.1,347.4] 7 14657 [4,23,34] [76,15,9] 78.5 [5.9,608.2] 3 12378 [7,15,27] [0,58,42] 98.5 [6.8,784.1] 1 19498 [7,16,28] [0,63,37] 676.7* [18.3,1800] 0 24740 [4,19,32] [90,4,6] 868* [25.3,1800] 0 29596 [4,20,34] [92,6,3]

Detailed results for each instance

Detailed results for each instance can be generated using instance.Rmd. The report has already been generated for some of the instances (see the links in the table with input statistics). Some instances that might be of interest:

Forget, N., L. R. Nielsen, and S. L Gadegaard. 2020. “Computational Results (All Instances).” Aarhus University. https://mcdmsociety.github.io/MOrepo-Forget20/report.html.

Lohne, A., and B. Weibing. n.d. “Bensolve.” http://www.bensolve.org/.

Relund Nielsen, Lars. 2020. GMOIP: Tools for 2D and 3D Plots of Single and Multi-Objective Linear/Integer Programming Models. https://github.com/relund/gMOIP/.

Citation

For attribution, please cite this work as

Forget, et al. (2020, Aug. 27). Computational results (preliminary version). Retrieved from https://mcdmsociety.github.io/MOrepo-Forget20/report_paper.html

BibTeX citation

@misc{forget2020computational,
  author = {Forget, Nicolas and Nielsen, Lars Relund and Gadegaard, Sune Lauth},
  title = {Computational results (preliminary version)},
  url = {https://mcdmsociety.github.io/MOrepo-Forget20/report_paper.html},
  year = {2020}
}